Lock-Free Optimization for Non-Convex Problems

Authors

  • Shen-Yi Zhao
  • Gong-Duo Zhang
  • Wu-Jun Li
Abstract

Stochastic gradient descent (SGD) and its variants have attracted much attention in machine learning due to their efficiency and effectiveness for optimization. To handle large-scale problems, researchers have recently proposed several lock-free strategy based parallel SGD (LF-PSGD) methods for multi-core systems. However, existing works have only proved the convergence of these LF-PSGD methods for convex problems. To the best of our knowledge, no work has proved the convergence of LF-PSGD methods for non-convex problems. In this paper, we provide theoretical proofs of the convergence of two representative LF-PSGD methods, Hogwild! and AsySVRG, for non-convex problems. Empirical results also show that both Hogwild! and AsySVRG are convergent on non-convex problems, which verifies our theoretical results.

Introduction

Many machine learning models can be formulated as the following optimization problem:
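The page preview is cut off before the formula itself; the standard finite-sum form assumed here, consistent with the SGD setting described in the abstract, is

    \min_{w \in \mathbb{R}^d} f(w) = \frac{1}{n} \sum_{i=1}^{n} f_i(w),

where each f_i is the (possibly non-convex) loss contributed by the i-th training example.

To make the lock-free idea concrete, the following is a minimal Hogwild!-style sketch in Python: several threads read and update one shared parameter vector with no locking, so updates may partially overwrite each other. It illustrates only the access pattern, not the paper's exact algorithm or proof conditions; the squared-sigmoid loss, data sizes, thread count, and step size are assumptions made for this example, and CPython's GIL means the demo does not show real multi-core speedup.

    # Hogwild!-style lock-free parallel SGD on a toy non-convex objective
    import threading
    import numpy as np

    n, d = 1000, 10
    rng = np.random.default_rng(0)
    X = rng.normal(size=(n, d))
    y = (X @ rng.normal(size=d) > 0).astype(float)
    w = np.zeros(d)  # shared parameters, updated by all threads without locks

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def worker(w, steps=5000, lr=0.1):
        local_rng = np.random.default_rng()
        for _ in range(steps):
            i = local_rng.integers(n)                        # sample one example
            p = sigmoid(X[i] @ w)                            # lock-free read of shared w
            grad = 2.0 * (p - y[i]) * p * (1.0 - p) * X[i]   # grad of (sigmoid(x.w) - y)^2
            w -= lr * grad                                   # lock-free in-place update

    threads = [threading.Thread(target=worker, args=(w,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("final loss:", np.mean((sigmoid(X @ w) - y) ** 2))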

Related articles

Particle Swarm Optimization for Hydraulic Analysis of Water Distribution Systems

Analyzing flow in water-distribution networks with several pumps via the Content Model can be cast as a non-convex, uncertain optimization problem with multiple solutions. Newton-based methods such as GGA are unable to capture a global optimum in these situations. On the other hand, evolutionary methods, which operate on a population of individuals, may find a global solution even for ...


A Block-wise, Asynchronous and Distributed ADMM Algorithm for General Form Consensus Optimization

Many machine learning models, including those with non-smooth regularizers, can be formulated as consensus optimization problems, which can be solved by the alternating direction method of multipliers (ADMM). Many recent efforts have been made to develop asynchronous distributed ADMM to handle large amounts of training data. However, all existing asynchronous distributed ADMM methods are based ...
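For background, the global-variable consensus form and the standard (scaled-form) ADMM updates it admits are sketched below; this is the textbook formulation, not the block-wise, general-form asynchronous variant the paper itself develops.

    \min_{x_1,\dots,x_N,\,z} \ \sum_{i=1}^{N} f_i(x_i) \quad \text{s.t.} \quad x_i = z, \ \ i = 1,\dots,N

    x_i^{k+1} = \arg\min_{x_i} \Big( f_i(x_i) + \tfrac{\rho}{2}\,\|x_i - z^{k} + u_i^{k}\|_2^2 \Big)
    z^{k+1} = \frac{1}{N} \sum_{i=1}^{N} \big( x_i^{k+1} + u_i^{k} \big)
    u_i^{k+1} = u_i^{k} + x_i^{k+1} - z^{k+1}

Here ρ > 0 is the penalty parameter and the u_i are scaled dual variables; asynchronous variants let workers perform these updates without waiting for all other workers to finish.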


Solving a non-convex non-linear optimization problem constrained by fuzzy relational equations and Sugeno-Weber family of t-norms

The Sugeno-Weber family of t-norms and t-conorms is one of the most widely applied families in fuzzy modelling problems. This family was suggested by Weber for modeling the intersection and union of fuzzy sets, and the t-conorms were suggested by Sugeno as addition rules for so-called $\lambda$-fuzzy measures. In this paper, we study a nonlinear optimization problem where the fea...
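For reference, the Sugeno-Weber t-norm for a parameter λ ∈ (-1, ∞), together with Sugeno's λ-addition rule mentioned in the abstract, is stated below from the general literature rather than from the paper's own text:

    T_{\lambda}(x, y) = \max\!\left( 0, \ \frac{x + y - 1 + \lambda x y}{1 + \lambda} \right)

    g_{\lambda}(A \cup B) = g_{\lambda}(A) + g_{\lambda}(B) + \lambda\, g_{\lambda}(A)\, g_{\lambda}(B), \quad A \cap B = \varnothing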


An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems

Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...
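The class of problems described in the abstract can be written, in general form, as

    \min_{x} \ f(x) \quad \text{s.t.} \quad g(x) \le 0, \quad Ax = b,

where f is nonsmooth and convex, g collects the nonlinear inequality constraints, and Ax = b are the affine equality constraints; in neurodynamic approaches of this kind, the network is typically designed so that its equilibrium points coincide with optimal solutions of this problem.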


Particle Swarm Optimization with Smart Inertia Factor for Combined Heat and Power Economic Dispatch

In this paper, a particle swarm optimization with smart inertia factor (PSO-SIF) algorithm is proposed to solve the combined heat and power economic dispatch (CHPED) problem. The CHPED problem is one of the most important problems in power systems and is a challenging non-convex, non-linear optimization problem. The aim of solving the CHPED problem is to determine the optimal heat and power of generating u...
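For context, the standard inertia-weight PSO updates are given below, taken from the general PSO literature; the paper's specific "smart" rule for adapting the inertia factor is not given in this preview.

    v_i^{t+1} = \omega\, v_i^{t} + c_1 r_1 \big( p_i^{t} - x_i^{t} \big) + c_2 r_2 \big( g^{t} - x_i^{t} \big)
    x_i^{t+1} = x_i^{t} + v_i^{t+1}

Here ω is the inertia factor, p_i is particle i's personal best, g is the swarm's global best, c_1 and c_2 are acceleration coefficients, and r_1, r_2 are uniform random numbers in [0, 1].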



Publication date: 2017